45 research outputs found

    Blur aware metric depth estimation with multi-focus plenoptic cameras

    Full text link
    While a traditional camera captures only one point of view of a scene, a plenoptic (or light-field) camera captures spatial and angular information in a single snapshot, enabling depth estimation from a single acquisition. In this paper, we present a new metric depth estimation algorithm using only raw images from a multi-focus plenoptic camera. The proposed approach is especially suited to the multi-focus configuration, where several micro-lenses with different focal lengths are used. The main goal of our blur-aware depth estimation (BLADE) approach is to improve disparity estimation for defocus stereo images by integrating both correspondence and defocus cues. We thus leverage blur information where it was previously considered a drawback. We explicitly derive an inverse projection model including the defocus blur, providing depth estimates up to a scale factor. A method to calibrate the inverse model is then proposed. We thus take depth scaling into account to achieve precise and accurate metric depth estimates. Our results show that introducing defocus cues improves the depth estimation. We demonstrate the effectiveness of our framework and depth scaling calibration on relative depth estimation setups and on real-world complex 3D scenes with ground truth acquired with a 3D lidar scanner.
    Comment: 21 pages, 12 figures, 3 tables
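    As a rough illustration of the blur-aware matching idea described above, the sketch below scores disparity hypotheses with a cost that first equalizes the defocus difference between the two views. Everything here is assumed for illustration — the Gaussian PSF model, the fixed `sigma_rel`, and all function names; in BLADE the blur level is coupled to the hypothesised depth rather than fixed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_aware_cost(sharp_patch, blurry_patch, sigma_rel):
    """Blur-aware matching cost: blur the sharper patch by the relative
    blur sigma_rel before comparing, so that a defocus difference
    between the two micro-images does not bias the SSD."""
    equalized = gaussian_filter(sharp_patch, sigma_rel)
    return np.sum((equalized - blurry_patch) ** 2)

def disparity_with_defocus_cue(img_sharp, img_blurry, row, col,
                               patch=7, max_disp=16, sigma_rel=1.2):
    """Scan disparity hypotheses along a row and keep the one with the
    best blur-aware cost. Illustrative only: sigma_rel is fixed here,
    whereas a depth-coupled model would vary it per hypothesis."""
    half = patch // 2
    ref = img_sharp[row - half:row + half + 1, col - half:col + half + 1]
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        c = col - d
        cand = img_blurry[row - half:row + half + 1, c - half:c + half + 1]
        if cand.shape != ref.shape:        # hypothesis falls off the image
            continue
        cost = blur_aware_cost(ref, cand, sigma_rel)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```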

    How to extrinsically calibrate cameras with non-overlapping fields of view? An application to a mobile robot

    Get PDF
    Multi-camera systems are increasingly used in vision-based robotics, and an accurate extrinsic calibration (the cameras' relative poses) is usually required. In most cases, this task is done by matching features across different views of the same scene. However, if the camera fields of view do not overlap, such a matching procedure is no longer feasible. This article presents a simple and flexible extrinsic calibration method for non-overlapping camera rigs. The aim is the calibration of non-overlapping cameras embedded on a vehicle, for visual navigation purposes in urban environments. The cameras do not see the same area at the same time. The calibration procedure consists in manoeuvring the vehicle while each camera, intrinsically calibrated beforehand, observes a static scene. The main contributions are a study of the singular motions and a specific bundle adjustment that both reconstructs the scene and calibrates the cameras. Solutions to handle singular configurations, such as planar motions, are presented. The proposed approach has been validated with synthetic and real data. This article is translated from [19].
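    The abstract does not spell out the estimation details, but a common way to initialize such a non-overlapping calibration is a hand-eye-style formulation A_i X = X B_i, where A_i and B_i are the relative motions of the two cameras between time steps (each from its own structure-from-motion track) and X is the sought inter-camera pose. The sketch below is that generic initialization, not the article's bundle adjustment; it assumes metric, consistently scaled translations and, echoing the article's singularity study, degenerates for planar motions where all rotation axes are parallel.

```python
import numpy as np

def rotation_log(R):
    """Axis-angle vector of a rotation matrix; assumes the rotation
    angle is safely away from 0 and pi."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-9:
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(angle))
    return angle * axis

def hand_eye_rotation(As, Bs):
    """Rotation part of X in A_i X = X B_i (4x4 motion matrices): the
    rotation axes of paired motions are related by R_X, so fit R_X with
    Kabsch. Degenerate when all axes are parallel (e.g. planar motion)."""
    alpha = np.stack([rotation_log(A[:3, :3]) for A in As])
    beta = np.stack([rotation_log(B[:3, :3]) for B in Bs])
    U, _, Vt = np.linalg.svd(beta.T @ alpha)
    V = Vt.T
    return V @ np.diag([1.0, 1.0, np.linalg.det(V @ U.T)]) @ U.T

def hand_eye_translation(As, Bs, Rx):
    """Translation of X from the stacked linear system
    (R_Ai - I) t_X = R_X t_Bi - t_Ai, solved in least squares."""
    M = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    v = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t, *_ = np.linalg.lstsq(M, v, rcond=None)
    return t
```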

    Leveraging blur information for plenoptic camera calibration

    Full text link
    This paper presents a novel calibration algorithm for plenoptic cameras, especially in the multi-focus configuration where several types of micro-lenses are used, relying on raw images only. Current calibration methods rely on simplified projection models, use features from reconstructed images, or require separate calibrations for each type of micro-lens. In the multi-focus configuration, the same part of a scene exhibits different amounts of blur depending on the micro-lens focal length. Usually, only the micro-images with the smallest amount of blur are used. In order to exploit all available data, we propose to explicitly model the defocus blur in a new camera model with the help of our newly introduced Blur Aware Plenoptic (BAP) feature. First, it is used in a pre-calibration step that retrieves initial camera parameters; second, it is used to express a new cost function to be minimized in our single optimization process; and third, it is exploited to calibrate the relative blur between micro-images. It links the geometric blur, i.e., the blur circle, to the physical blur, i.e., the point spread function. Finally, we use the resulting blur profile to characterize the camera's depth of field. Quantitative evaluations in a controlled environment on real-world data demonstrate the effectiveness of our calibration.
    Comment: arXiv admin note: text overlap with arXiv:2004.0774
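    To make the geometric-to-physical blur link concrete, here is a minimal sketch of a relative-blur calibration under an assumed Gaussian PSF with sigma = k * r (r being the blur-circle radius). The sigma = k*r model, the grid search, and all names are illustrative assumptions; the paper's relative-blur calibration is richer than this.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def relative_sigma(r1, r2, k):
    """Gaussian-equivalent relative blur between two micro-images with
    geometric blur-circle radii r1 < r2, under the assumed sigma = k*r
    model: variances of Gaussian blurs subtract."""
    return k * np.sqrt(max(r2 ** 2 - r1 ** 2, 0.0))

def calibrate_k(pairs, ks=np.linspace(0.3, 1.5, 25)):
    """Grid-search the proportionality k that best explains observed
    micro-image pairs: blurring the sharper image by the relative sigma
    should reproduce the blurrier one.
    pairs: list of (sharp_img, blurry_img, r1, r2) tuples."""
    def total_cost(k):
        return sum(
            np.mean((gaussian_filter(s, relative_sigma(r1, r2, k)) - b) ** 2)
            for s, b, r1, r2 in pairs)
    return min(ks, key=total_cost)
```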

    Dynamic visual servoing from sequential regions of interest acquisition

    Get PDF
    One of the main unsolved drawbacks of vision-based control is its poor dynamic performance, caused by the low acquisition frequency of vision systems and the latency due to image processing. In this paper we take up the challenge of designing a high-performance dynamic visual servo control scheme. Two versatile control laws are developed: a position-based dynamic visual servoing and an image-based dynamic visual servoing. Both control laws compute the control torques exclusively from a sequential acquisition of regions of interest containing the visual features, in order to achieve accurate trajectory tracking. Experiments on vision-based dynamic control of a high-speed parallel robot show that the proposed control schemes can outperform joint-based computed torque control.
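    The two control laws are not detailed in the abstract; below is a minimal, generic sketch of an image-based computed-torque loop. All names (`M`, `h`, `J_img_pinv`, the gain matrices) are placeholders supplied by the robot and camera models, not quantities from the paper.

```python
import numpy as np

def ibvs_computed_torque(M, h, J_img_pinv, s, s_ref, s_dot, s_ref_dot,
                         Kp, Kd):
    """One step of a computed-torque law driven by an image-feature
    error. M: joint-space inertia matrix; h: Coriolis + gravity terms;
    J_img_pinv: pseudo-inverse of the image Jacobian, mapping feature
    velocities to joint velocities."""
    e = J_img_pinv @ (s_ref - s)                  # feature error in joint space
    e_dot = J_img_pinv @ (s_ref_dot - s_dot)      # feature-rate error
    qdd_cmd = Kp @ e + Kd @ e_dot                 # PD reference acceleration
    return M @ qdd_cmd + h                        # inverse-dynamics torque
```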

    Revisiting SLAM bundle adjustment using an RGB-D sensor

    Get PDF
    We present a method that integrates the depth information provided by an RGB-D sensor into Simultaneous Localization And Mapping (SLAM) in order to improve localization accuracy. We introduce a new local bundle adjustment that combines depth measurements and visual data in a single cost function fully expressed in pixels. The proposed approach is evaluated on benchmark sequences and compared with state-of-the-art methods.
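    As a sketch of what a cost "fully expressed in pixels" can look like, the residual below stacks the usual 2D reprojection error with a depth discrepancy rescaled by the focal length, so that all three components share pixel units. The inverse-depth form and the weight `w_depth` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rgbd_residual(K, R, t, X_world, uv_obs, z_obs, w_depth=1.0):
    """Residual for one observation of a map point: a 2D reprojection
    error plus a depth discrepancy rescaled by the focal length so all
    three components share pixel units."""
    Xc = R @ X_world + t                          # point in camera frame
    uv_pred = (K @ Xc)[:2] / Xc[2]                # pinhole projection
    r_uv = uv_pred - uv_obs                       # reprojection error (px)
    f = K[0, 0]
    r_z = w_depth * f * (1.0 / z_obs - 1.0 / Xc[2])  # depth gap, in px
    return np.array([r_uv[0], r_uv[1], r_z])
```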

    Pattern tracking and visual servoing for indoor mobile environment mapping and autonomous navigation

    No full text
    In this paper, an image-based framework for navigation of a mobile robot in an indoor environment is presented. The only sensor used is an embedded monocular vision system. The environment is autonomously mapped during a learning stage in order to localize the robot on-line. A suitable control law is designed to drive the robot along a route represented by an image database. To address this issue, a Virtual nonHolonomic Vehicle (VNHV) attached to the image plane is defined.

    Model-based localization of an indoor mobile robot

    No full text
    An incremental and absolute mobile robot self-localization method in a partially modelled indoor environment is presented. A wire-frame representation of the environment is adopted. The notion of occlusion is taken into account using View Invariant Regions. A pin-hole model of the camera is obtained with Zhang's calibration method. The localization approach is composed of four steps: image acquisition from the robot's current position and orientation, image feature extraction, 3-D/2-D feature matching, and camera pose recovery. Two full-perspective camera pose recovery methods using straight-line correspondences and numerical optimization techniques are presented and adapted to the mobile robotics context. Finally, the crucial problem of matching image features to model features is solved with an algorithm based on Interpretation Tree search. The dimension of the correspondence space is reduced using the View Invariant Regions and the specific configuration of the robot. Two geometric constraints are used to efficiently prune the Interpretation Tree by testing local consistency, and a new function is defined to test global consistency and select the best matching hypothesis.
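    A minimal sketch of the Interpretation Tree search with local-consistency pruning follows. It shows only the backtracking skeleton with a user-supplied pairwise constraint, omitting the thesis-specific parts (View Invariant Regions, wildcard matches, the global-consistency scoring).

```python
def interpretation_tree(image_segs, model_segs, consistent, partial=()):
    """Depth-first search over image-to-model segment assignments.
    A branch is pruned as soon as its newest pairing violates a
    pairwise geometric constraint against any earlier pairing (local
    consistency). `consistent(p, q)` is a user-supplied predicate,
    e.g. comparing angles/distances between segment pairs in image
    and model frames. Yields complete interpretations."""
    i = len(partial)
    if i == len(image_segs):
        yield partial
        return
    for m in model_segs:
        pair = (image_segs[i], m)
        if all(consistent(prev, pair) for prev in partial):
            yield from interpretation_tree(image_segs, model_segs,
                                           consistent, partial + (pair,))
```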

    Image Sensor Technology

    No full text
    No abstract available.

    Rolling Shutter Homography and its Applications

    No full text
    In this article we study the adaptation of the concept of homography to Rolling Shutter (RS) images. This extension has never been clearly addressed, despite the many roles played by the homography matrix in multi-view geometry. We first show that a direct point-to-point relationship on an RS pair can be expressed as a set of 3 to 8 atomic 3x3 matrices, depending on the kinematic model used for the instantaneous motion during image acquisition. We call this group of matrices the RS homography. We then propose linear solvers for the computation of these matrices using point correspondences. Finally, we derive linear and closed-form solutions for two famous problems in computer vision in the case of RS images: image stitching and plane-based relative pose computation. Extensive experiments with both synthetic and real data from public benchmarks show that the proposed methods outperform state-of-the-art techniques.
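    To make the "set of atomic 3x3 matrices" concrete, here is a toy point-transfer routine where the source pixel's row modulates a row-polynomial homography. The polynomial-in-v form is a simplification assumed purely for illustration: in the paper the atoms' structure follows from the chosen kinematic model, and the full point-to-point relation also involves the target pixel's own row.

```python
import numpy as np

def rs_point_transfer(x1, H_atoms):
    """Toy direct point transfer between two rolling-shutter images:
    the source pixel's row v selects an instantaneous homography
    H(v) = H0 + v*H1 + v^2*H2 + ...  H_atoms is a list of 3x3 arrays
    standing in for the paper's atomic matrices; their real structure
    depends on the kinematic model."""
    u, v = x1
    Hv = sum(Hk * (v ** k) for k, Hk in enumerate(H_atoms))
    p = Hv @ np.array([u, v, 1.0])
    return p[:2] / p[2]
```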